Results 1 - 9 of 9
1.
Sensors (Basel) ; 23(12)2023 Jun 14.
Article in English | MEDLINE | ID: mdl-37420726

ABSTRACT

This paper proposes the design of a 360° map-establishment and real-time simultaneous localization and mapping (SLAM) algorithm based on equirectangular projection. The system accepts any equirectangular image with a 2:1 aspect ratio as input, allowing an unlimited number and arrangement of cameras. First, the proposed system uses dual back-to-back fisheye cameras to capture 360° images, then applies a perspective transformation at any given yaw angle to shrink the feature-extraction area, reducing computational time while retaining the 360° field of view. Second, oriented FAST and rotated BRIEF (ORB) feature points extracted from the perspective images with GPU acceleration are used for tracking, mapping, and camera pose estimation. The 360° binary map supports saving, loading, and online updating, enhancing the flexibility, convenience, and stability of the system. The proposed system is also implemented on an NVIDIA Jetson TX2 embedded platform, with a 1% accumulated RMS error over a 250 m trajectory. The system averages 20 frames per second (FPS) with a single fisheye camera at 1024 × 768 resolution, while simultaneously performing panoramic stitching and blending at 1416 × 708 resolution from the dual-fisheye camera.
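The yaw-steered perspective transformation described above can be sketched as an inverse mapping from perspective pixels to equirectangular source coordinates (a minimal sketch with illustrative parameter names, not the paper's exact implementation):

```python
import numpy as np

def perspective_to_equirect_coords(out_w, out_h, fov_deg, yaw_deg, eq_w, eq_h):
    """Map each pixel of a perspective view (given yaw and FOV) to
    equirectangular (2:1) source coordinates. Sketch only; the paper's
    exact transform is not specified in the abstract."""
    f = 0.5 * out_w / np.tan(np.radians(fov_deg) / 2)   # pinhole focal length
    u, v = np.meshgrid(np.arange(out_w) - out_w / 2 + 0.5,
                       np.arange(out_h) - out_h / 2 + 0.5)
    # Ray direction for each pixel, then rotate by yaw about the vertical axis.
    x, y, z = u, v, np.full_like(u, f)
    yaw = np.radians(yaw_deg)
    xr = x * np.cos(yaw) + z * np.sin(yaw)
    zr = -x * np.sin(yaw) + z * np.cos(yaw)
    lon = np.arctan2(xr, zr)                 # [-pi, pi] around the sphere
    lat = np.arctan2(y, np.hypot(xr, zr))    # elevation, positive downward
    map_x = (lon / (2 * np.pi) + 0.5) * eq_w
    map_y = (lat / np.pi + 0.5) * eq_h
    return map_x.astype(np.float32), map_y.astype(np.float32)
```

The returned coordinate maps could be fed to a bilinear remap (e.g. `cv2.remap`) to extract the reduced feature-extraction region at any yaw while the full 360° map is kept in equirectangular form.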


Subjects
Acceleration, Algorithms, Autonomous Vehicles, Records
2.
Sensors (Basel) ; 23(5)2023 Mar 02.
Article in English | MEDLINE | ID: mdl-36904958

ABSTRACT

This paper proposes a deep-learning-based mmWave radar and RGB camera early-fusion method for object detection and tracking, together with its embedded-system realization for ADAS applications. The proposed system can be used not only in ADAS but also in smart Road Side Units (RSUs) in transportation systems to monitor real-time traffic flow and warn road users of probable dangerous situations. Because mmWave radar signals are less affected by adverse weather and lighting (cloudy, sunny, snowy, rainy, or night-time conditions), the system works efficiently in both normal and adverse conditions. Compared with using an RGB camera alone for object detection and tracking, early fusion of mmWave radar and RGB camera data compensates for the camera's poor performance when it fails due to bad weather and/or lighting. The proposed method combines the features of radar and RGB cameras and directly outputs results from an end-to-end trained deep neural network. The overall system complexity is also reduced, so the method can be implemented on PCs as well as on embedded systems such as the NVIDIA Jetson Xavier, where it runs at 17.39 fps.
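An early-fusion step of this kind, where radar returns are projected into the image plane and stacked as extra input channels before the network, might look like the sketch below (the paper's actual feature-level fusion is not detailed in the abstract; the channel layout and function names are assumptions):

```python
import numpy as np

def early_fuse(rgb, radar_pts, K):
    """Early-fusion sketch (assumed pipeline, not the paper's exact network):
    project 3-D radar points through camera intrinsics K and rasterize their
    range and Doppler onto two extra image channels, giving a 5-channel
    tensor an end-to-end detector can consume."""
    h, w, _ = rgb.shape
    extra = np.zeros((h, w, 2), dtype=np.float32)   # range map, Doppler map
    for x, y, z, doppler in radar_pts:              # camera frame, z forward
        if z <= 0:
            continue
        u = int(K[0, 0] * x / z + K[0, 2])
        v = int(K[1, 1] * y / z + K[1, 2])
        if 0 <= u < w and 0 <= v < h:
            extra[v, u, 0] = np.hypot(x, z)         # radial range
            extra[v, u, 1] = doppler
    return np.concatenate([rgb.astype(np.float32), extra], axis=-1)
```

Because the radar channels carry information even when the RGB channels are washed out by rain or glare, a detector trained on the fused tensor can degrade more gracefully than a camera-only model.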

3.
Sensors (Basel) ; 24(1)2023 Dec 31.
Article in English | MEDLINE | ID: mdl-38203111

ABSTRACT

Advanced driver assistance systems (ADASs) are becoming increasingly common in modern vehicles, as they not only improve safety and reduce accidents but also make driving smoother and easier. ADASs rely on a variety of sensors (cameras, radars, lidars, often in combination) to perceive their surroundings and identify and track objects on the road. The key components of ADASs are object detection, recognition, and tracking algorithms that allow vehicles to identify and track other road users and objects, such as vehicles, pedestrians, cyclists, obstacles, traffic signs, and traffic lights. This information is then used to warn the driver of potential hazards or used by the ADAS itself to take corrective action and avoid an accident. This paper reviews prominent state-of-the-art object detection, recognition, and tracking algorithms used in different ADAS functionalities. It begins by introducing the history and fundamentals of ADASs, then reviews recent trends in ADAS algorithms and their functionalities, along with the datasets employed. The paper concludes by discussing the future of object detection, recognition, and tracking algorithms for ADASs, and the need for more research in challenging environments, such as those with low visibility or high traffic density.
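The tracking-by-detection paradigm surveyed here can be illustrated with a minimal greedy IoU tracker (an illustrative baseline, not any specific algorithm from the review):

```python
def iou(a, b):
    """Intersection-over-union of two boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def track_step(tracks, detections, thresh=0.3):
    """Greedy IoU association: match each existing track (id -> box) to the
    best-overlapping new detection; unmatched detections spawn new tracks."""
    next_id = max(tracks, default=-1) + 1
    updated, used = {}, set()
    for tid, box in tracks.items():
        best, best_j = thresh, None
        for j, det in enumerate(detections):
            if j not in used and iou(box, det) >= best:
                best, best_j = iou(box, det), j
        if best_j is not None:
            updated[tid] = detections[best_j]
            used.add(best_j)
    for j, det in enumerate(detections):
        if j not in used:
            updated[next_id] = det
            next_id += 1
    return updated
```

Production trackers add motion models (e.g. Kalman filters) and appearance features, but this association step is the common core.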

4.
Sensors (Basel) ; 22(19)2022 Sep 28.
Article in English | MEDLINE | ID: mdl-36236484

ABSTRACT

This paper proposes a deep-learning-based object detection method for locating distant regions in an image in real time. It concentrates on distant objects from a vehicular front-camera perspective, addressing a common problem in Advanced Driver Assistance System (ADAS) applications: detecting small, faraway objects with the same confidence as larger, closer ones. The paper presents an efficient multi-scale object detection network, termed ConcentrateNet, that detects a vanishing point and concentrates on the near-distant region. Initially, a first inference of the object detection model produces detection results at a larger receptive-field scale and predicts a potential vanishing-point location, that is, the farthest location in the frame. The image is then cropped near the vanishing-point location and processed by the object detection model a second time to obtain distant-object detections. Finally, the two inference results are merged with a specific Non-Maximum Suppression (NMS) method. The ConcentrateNet architecture can be employed in most object detection models; it is implemented in several state-of-the-art detectors to verify feasibility. Compared with the original models using a higher-resolution input, the ConcentrateNet versions use a lower-resolution input and less model complexity while achieving significant precision and recall improvements. Moreover, the proposed model is successfully ported onto a low-power embedded system, the NVIDIA Jetson AGX Xavier, making it suitable for real-time autonomous machines.
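The two-pass inference and NMS merge can be sketched as follows (the `detect` interface, crop fraction, and thresholds are illustrative assumptions; the paper's specific NMS variant is not reproduced):

```python
import numpy as np

def concentrate_detect(detect, image, vp, crop=0.5, iou_thresh=0.5):
    """Two-pass inference sketch: `detect(img)` is assumed to return rows of
    [x1, y1, x2, y2, score]. Pass 1 runs on the full frame; pass 2 on a crop
    centered on the vanishing point vp, whose boxes are shifted back to
    full-frame coordinates before a joint NMS merge."""
    h, w = image.shape[:2]
    ch, cw = int(h * crop), int(w * crop)
    x0 = min(max(vp[0] - cw // 2, 0), w - cw)   # clamp crop inside the frame
    y0 = min(max(vp[1] - ch // 2, 0), h - ch)
    far = detect(image[y0:y0 + ch, x0:x0 + cw])
    far = [[x1 + x0, y1 + y0, x2 + x0, y2 + y0, s] for x1, y1, x2, y2, s in far]
    return nms(detect(image) + far, iou_thresh)

def nms(boxes, thresh):
    """Standard greedy non-maximum suppression by descending score."""
    keep = []
    for b in sorted(boxes, key=lambda b: -b[4]):
        if all(iou(b, k) < thresh for k in keep):
            keep.append(b)
    return keep

def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    ar = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (ar(a) + ar(b) - inter + 1e-9)
```

Because the second pass sees the distant region at a larger effective scale, small objects near the vanishing point occupy more pixels than they do in the full frame, which is what recovers their detection confidence.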


Subjects
Automobile Driving, Neural Networks (Computer), Chronic Disease, Data Collection, Humans
5.
IEEE Trans Neural Netw Learn Syst ; 33(10): 5978-5992, 2022 Oct.
Article in English | MEDLINE | ID: mdl-34310321

ABSTRACT

This article proposes a hardware-oriented neural-network development tool, called Intelligent Vision System Lab (IVS)-Caffe. IVS-Caffe simulates the hardware behavior of convolutional neural network (CNN) inference: it quantizes the weights, input, and output features of a CNN and models the multiplier and accumulator calculations to achieve bit-accurate results, allowing the accuracy of a chosen CNN hardware accelerator to be tested. The article also proposes an algorithm that corrects the deviation of gradient backpropagation introduced by the bit-accurate quantized multipliers and accumulators, enabling training of a bit-accurate model and further increasing CNN accuracy at a user-designed bit width. The experiments use Faster region-based CNN (R-CNN) with ZF-Net (Zeiler and Fergus), Single Shot MultiBox Detector (SSD) with VGG, SSD with MobileNet, and Tiny You Only Look Once (YOLO) v2. These models cover both one-stage and two-stage object detection, and their base networks include convolutional layers, fully connected layers, and modern advanced layers such as the inception module and depthwise separable convolution. In these experiments, directly quantizing layer-I/O fixed-point models to bit-accurate models incurs a 2% drop in mean average precision (mAP) under the constraint that all layers' accumulators and multipliers are quantized to at most 14 and 12 bits, respectively. After retraining the quantized models with the proposed IVS-Caffe, the mAP drop falls below 1% under the constraint of at most 14 and 11 bits, respectively. With IVS-Caffe, the accuracy of a target model running on hardware accelerators of different bit widths can be analyzed, which helps fine-tune the target model or customize accelerators for lower power consumption. Code is available at https://github.com/apple35932003/IVS-Caffe.
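The bit-accurate multiplier/accumulator simulation that such a tool performs can be illustrated with a toy fixed-point dot product (bit widths and the fraction-bit split are illustrative; the tool itself operates on whole Caffe layers):

```python
import numpy as np

def quantize(x, bits, frac_bits):
    """Round to a signed fixed-point grid with `bits` total bits,
    `frac_bits` of which are fractional, with saturation at the extremes."""
    x = np.asarray(x, dtype=float)
    scale = 2 ** frac_bits
    lo, hi = -2 ** (bits - 1), 2 ** (bits - 1) - 1
    return np.clip(np.round(x * scale), lo, hi) / scale

def bit_accurate_dot(w, a, mul_bits, acc_bits, frac_bits=4):
    """Sketch of a bit-accurate MAC simulation in the spirit of IVS-Caffe
    (parameter names are illustrative): each product is saturated to the
    multiplier output width and the running sum to the accumulator width, so
    the float result matches what the fixed-point hardware would produce."""
    acc = 0.0
    for wi, ai in zip(quantize(w, mul_bits, frac_bits),
                      quantize(a, mul_bits, frac_bits)):
        prod = quantize(wi * ai, mul_bits * 2, frac_bits * 2)
        acc = quantize(acc + prod, acc_bits, frac_bits * 2)
    return acc
```

Because rounding and saturation have zero gradient almost everywhere, retraining through such an operator needs a surrogate gradient (e.g. a straight-through estimator), which is the deviation-correction problem the article addresses.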

6.
Sensors (Basel) ; 21(16)2021 Aug 06.
Article in English | MEDLINE | ID: mdl-34450765

ABSTRACT

A method of direction-of-arrival (DoA) estimation for FMCW (frequency-modulated continuous-wave) radar is presented. Besides MUSIC, the popular high-resolution DoA estimation algorithm, deep learning has recently emerged as a promising alternative, and this paper proposes a 3D convolutional neural network (CNN) for DoA estimation. The 3D-CNN extracts spectrum features of the region of interest (RoI), centered on potential target positions, from the radar data cube, thereby capturing the spectrum phase-shift information along the antenna axis that corresponds to DoA. Finally, simulation and experimental results demonstrate both the superior performance and the limitations of the proposed 3D-CNN.
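For intuition about why the phase shift along the antenna axis encodes DoA, here is a simple FFT-based estimator for a uniform linear array (a classical baseline, not the paper's 3D-CNN):

```python
import numpy as np

def doa_fft(snapshots, d_over_lambda=0.5, n_fft=256):
    """FFT-based DoA estimate over a uniform linear array: a plane wave from
    angle theta produces a linear phase progression across antennas, i.e. a
    spatial frequency d/lambda * sin(theta) in cycles per antenna."""
    # snapshots: complex samples from one range-Doppler bin, one per antenna
    spectrum = np.fft.fftshift(np.fft.fft(snapshots, n_fft))
    freqs = np.fft.fftshift(np.fft.fftfreq(n_fft))      # cycles per antenna
    k = np.argmax(np.abs(spectrum))                     # strongest direction
    sin_theta = freqs[k] / d_over_lambda
    return np.degrees(np.arcsin(np.clip(sin_theta, -1, 1)))
```

MUSIC sharpens this spectrum using the noise-subspace structure, and the paper's 3D-CNN instead learns the mapping from the RoI of the data-cube spectrum directly.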

7.
Sensors (Basel) ; 20(18)2020 Sep 15.
Article in English | MEDLINE | ID: mdl-32942628

ABSTRACT

This paper proposes a deep-learning model with task-specific bounding-box regressors (TSBBRs) and a conditional back-propagation mechanism for detecting objects in motion for advanced driver assistance system (ADAS) applications. The model uses separate object detection networks for objects of different sizes to achieve better detection of both larger and tinier objects. For larger objects, a network with a larger visual receptive field acquires information from larger areas; for the detection of tinier objects, a network with a smaller receptive field exploits fine-grained features. The conditional back-propagation mechanism yields different types of TSBBRs that perform data-driven learning for the set criterion and learn representations of different object sizes without degrading each other. The dual-path bounding-box regressors can simultaneously detect objects at widely differing scales and aspect ratios. Only a single network inference is needed per frame to detect multiple object types, such as bicycles, motorbikes, cars, buses, trucks, and pedestrians, and to locate their exact positions. The proposed model was developed and implemented on several NVIDIA devices, namely the 1080 Ti, DRIVE PX 2, and Jetson TX2, achieving 67 frames per second (fps), 19.4 fps, and 8.9 fps, respectively, on 448 × 448 video input. It can detect objects as small as 13 × 13 pixels and achieves 86.54% accuracy on the publicly available Pascal Visual Object Classes (VOC) car database and 82.4% mean average precision (mAP) on a large collection of common real road scenes (the iVS database).
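The size-conditional routing idea, where each regressor branch only learns its own scale range, can be sketched as follows (the threshold and interface are illustrative; the paper's criterion is not specified in the abstract):

```python
import numpy as np

def route_targets(boxes, size_thresh=32):
    """Conditional-routing sketch: assign each ground-truth box
    (x1, y1, x2, y2) to the small-object or large-object regressor by its
    geometric-mean side length. During training, each branch would receive
    loss, and therefore gradients, only for its own partition, so the two
    paths specialize without degrading each other."""
    sides = np.sqrt((boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1]))
    small = sides < size_thresh
    return boxes[small], boxes[~small]
```

At inference time both branches run in the single forward pass, and their outputs are concatenated before post-processing.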

8.
Opt Express ; 27(9): 11877-11901, 2019 Apr 29.
Article in English | MEDLINE | ID: mdl-31052738

ABSTRACT

Dark Channel Prior (DCP) is one of the significant dehazing methods, based on observed key features of haze-free images. However, it has disadvantages: high computational complexity, over-enhancement in the sky region, flickering artifacts in video processing, and poor dehazing quality. We therefore propose improved solutions to these drawbacks. First, we adopt a fast one-dimensional filter, a look-up table, and program optimization to reduce the computational complexity. Next, we use part of the guided filter for sky detection, preserving the sky region from noise by avoiding over-recovery. Then, we propose an airlight update strategy and adjust the radius of the guided filter to reduce flickering artifacts, and finally propose an airlight estimation method that produces a better dehazing result. The improved results of the proposed algorithm are stable and obtained in real time, suitable for ADAS, surveillance, and monitoring systems. The implementation yields processing speeds of 75 fps and 23 fps on an NVIDIA Jetson TX1 embedded platform and a Renesas R-Car M2, respectively, both on D1 (720 × 480) resolution videos.
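The baseline DCP pipeline that the paper improves upon can be sketched as follows (the proposed sky detection, airlight update strategy, and guided-filter refinement are omitted; the patch size and airlight percentile are the usual illustrative defaults):

```python
import numpy as np

def dark_channel(img, patch=3):
    """Min over color channels followed by a local min filter: haze-free
    patches almost always contain a near-zero value in some channel."""
    mins = img.min(axis=2)
    h, w = mins.shape
    r = patch // 2
    padded = np.pad(mins, r, mode='edge')
    out = np.empty_like(mins)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def dehaze(img, omega=0.95, t0=0.1):
    """Baseline DCP recovery: estimate airlight A from the haziest pixels,
    estimate transmission t, and invert the haze model I = J*t + A*(1-t)."""
    dc = dark_channel(img)
    flat = dc.ravel()
    idx = flat.argsort()[-max(1, flat.size // 1000):]   # top 0.1% haziest
    A = img.reshape(-1, 3)[idx].max(axis=0)
    t = np.maximum(1 - omega * dark_channel(img / A), t0)
    return (img - A) / t[..., None] + A
```

The naive min filter here is O(patch²) per pixel; replacing it with the fast one-dimensional (separable) filter and a look-up table is exactly the kind of complexity reduction the paper's first contribution targets.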

9.
Opt Express ; 27(5): 7627-7628, 2019 Mar 04.
Article in English | MEDLINE | ID: mdl-30876324

ABSTRACT

Photonic technologies that support the low-cost manufacturing needed for automotive sensors have seen explosive development in recent years. To date, most commercially available lidar systems have been direct-detection time-of-flight (ToF) sensors operating at 905 nm and using mechanical mirrors for beam steering. However, these sensors suffer from important drawbacks. One issue is eye safety, which limits maximum laser power and hence operating range. Direct-detection systems must also contend with potential interference when many cars operate lidar systems simultaneously. In addition, mechanical scanners are frequently bulky and can be difficult to integrate within the form factors allowed by modern vehicles.
